LoneStriker committed
Commit 6572cfa • 1 Parent(s): b5f2add

Upload folder using huggingface_hub

Files changed:
- README.md +137 -0
- added_tokens.json +4 -0
- config.json +37 -0
- generation_config.json +8 -0
- model-00001-of-00004.safetensors +3 -0
- model-00002-of-00004.safetensors +3 -0
- model-00003-of-00004.safetensors +3 -0
- model-00004-of-00004.safetensors +3 -0
- model.safetensors.index.json +0 -0
- special_tokens_map.json +30 -0
- tokenizer.json +0 -0
- tokenizer.model +3 -0
- tokenizer_config.json +59 -0
README.md
ADDED
@@ -0,0 +1,137 @@
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
base_model: []
tags:
- mergekit
- merge
- Yi
- exllama
- exllamav2
- exl2
---
# RPMerge
A merge of several Yi 34B models with a singular goal: 40K+ context, instruct-enhanced storytelling.

Disappointed with some quirks of my previous kitchen-sink merges (like token/instruct formats from various models showing up when they shouldn't), I've gone 'back to the basics' and picked a few Vicuna-format-only models:

- [DrNicefellow/ChatAllInOne-Yi-34B-200K-V1](https://huggingface.co/DrNicefellow/ChatAllInOne-Yi-34B-200K-V1) and [migtissera/Tess-34B-v1.5b](https://huggingface.co/migtissera/Tess-34B-v1.5b) both have excellent general instruction-following performance.

- [cgato/Thespis-34b-v0.7](https://huggingface.co/cgato/Thespis-34b-v0.7) is trained on the "Username: {Input} / BotName: {Response}" format, to emphasize it in the merge (but not force it). It also seems to work for multi-character stories.

- [Doctor-Shotgun/limarpv3-yi-llama-34b-lora](https://huggingface.co/Doctor-Shotgun/limarpv3-yi-llama-34b-lora) is trained on roleplaying data, but merged at a modest weight so as not to overemphasize it. It is the only non-Vicuna model (being Alpaca format), but it doesn't seem to interfere with the Vicuna format or adversely affect long-context perplexity.

- [adamo1139/yi-34b-200k-rawrr-dpo-2](https://huggingface.co/adamo1139/yi-34b-200k-rawrr-dpo-2) is the base for the LimaRP LoRA: base Yi gently finetuned to discourage refusals.

- [migtissera/Tess-M-Creative-v1.0](https://huggingface.co/migtissera/Tess-M-Creative-v1.0) and [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B) are both "undertrained" Yi models. I find they excel at raw completion performance (like long novel continuations) while still retaining some Vicuna instruct ability. This may be why some still prefer the original Tess 1.0/Capybara merge.

I consider this a more "focused" merge than previous ones. I will investigate other models (perhaps ChatML models?) for a more "factual assistant" focused merge, as well as a coding-focused merge if I can't find one to suit my needs.


## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
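For illustration (not part of the original card), a minimal Python sketch that assembles a single-turn prompt in this Orca-Vicuna format; the system and user strings below are placeholders:

```python
# Minimal sketch: build a single-turn Orca-Vicuna prompt for this merge.
# The system/user text here is placeholder content, not from the model card.

def build_prompt(system_message: str, prompt: str) -> str:
    """Format one turn in the Orca-Vicuna template shown above."""
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

if __name__ == "__main__":
    text = build_prompt(
        "You are a creative writing assistant.",
        "Continue the story where the caravan reaches the ruined gate.",
    )
    print(text)
```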
Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/

As well as a very explicit system prompt like this: https://old.reddit.com/r/LocalLLaMA/comments/1aiz6zu/roleplaying_system_prompts/koygiwa/


## Running

Chinese models with large tokenizer vocabularies like Yi need *careful* parameter tuning due to their huge logit sampling "tails." Yi in particular also runs relatively "hot" even at lower temperatures.

I am a huge fan of Kalomaze's quadratic sampling (shown as "smoothing factor" where available), as described here: https://github.com/oobabooga/text-generation-webui/pull/5403

Otherwise, I recommend a lower temperature with 0.1 or higher MinP, a little repetition penalty, and mirostat with a low tau, and no other samplers. See the explanation here: https://github.com/ggerganov/llama.cpp/pull/3841
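As a rough reference only, here is a sketch of how that advice might map onto common sampler fields (names follow text-generation-webui/SillyTavern conventions). Apart from the 0.1 MinP taken from the card, the numbers below are illustrative assumptions to tune from, not recommendations:

```python
# Illustrative starting values for the sampler advice above; not prescribed by the card.
sampler_settings = {
    "temperature": 0.8,          # keep modest; Yi runs "hot"
    "min_p": 0.1,                # 0.1 or higher MinP, per the card
    "repetition_penalty": 1.05,  # "a little" repetition penalty
    "smoothing_factor": 0.2,     # Kalomaze's quadratic sampling, where available
    # Alternative route: mirostat with a low tau instead of MinP-style sampling
    # "mirostat_mode": 2, "mirostat_tau": 2.0,
    "top_p": 1.0,                # leave other samplers effectively disabled
    "top_k": 0,
}
print(sampler_settings)
```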

24GB GPUs can efficiently run Yi-34B-200K models at **40K-90K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). Empty 16GB GPUs can still run the high context with aggressive quantization.

To load/train this in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends that support flash attention + 8-bit KV cache, like exllamav2, litellm, vllm or unsloth.
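For example, a minimal sketch of that config edit, assuming a hypothetical local copy of the model; 32768 is just one example of a value lower than 200,000:

```python
# Sketch only: shrink max_position_embeddings before loading in transformers.
# "Yi-34B-200K-RPMerge" is a hypothetical local path; pick any limit below 200000.
import json

config_path = "Yi-34B-200K-RPMerge/config.json"

with open(config_path) as f:
    config = json.load(f)

config["max_position_embeddings"] = 32768  # illustrative lower context limit

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```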

## Testing Notes

Thanks to ParasiticRogue for this idea of a Vicuna-only merge, see: https://huggingface.co/brucethemoose/jondurbin_bagel-dpo-34b-v0.2-exl2-4bpw-fiction/discussions

See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8#testing-notes

This is a possible base for a storytelling finetune/LASER in the future, once I can bite the bullet and rent some A100s or an MI300.

I have tested this merge with novel-style continuation (but not much chat-style roleplay), some assistant-style responses, and long-context analysis. I haven't seen any refusals so far.

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base.

### Models Merged

The following models were included in the merge:
* /home/alpha/Models/Raw/migtissera_Tess-34B-v1.5b
* /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
* /home/alpha/Models/Raw/cgato_Thespis-34b-DPO-v0.7
* /home/alpha/Models/Raw/Nous-Capybara-34B
* /home/alpha/Models/Raw/admo_limarp
* /home/alpha/Models/Raw/DrNicefellow_ChatAllInOne-Yi-34B-200K-V1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Models/Raw/migtissera_Tess-34B-v1.5b
    # Emphasize the beginning of Vicuna format models
    parameters:
      weight: 0.19
      density: 0.59
  - model: /home/alpha/Models/Raw/Nous-Capybara-34B
    parameters:
      weight: 0.19
      density: 0.55
    # Vicuna format
  - model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
    parameters:
      weight: 0.05
      density: 0.55
  - model: /home/alpha/Models/Raw/DrNicefellow_ChatAllInOne-Yi-34B-200K-V1
    parameters:
      weight: 0.19
      density: 0.55
  - model: adamo1139/yi-34b-200k-rawrr-dpo-2+Doctor-Shotgun/limarpv3-yi-llama-34b-lora
    parameters:
      weight: 0.19
      density: 0.48
  - model: /home/alpha/Models/Raw/cgato_Thespis-34b-DPO-v0.7
    parameters:
      weight: 0.19
      density: 0.59

merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```


## Self Promotion

I'm part of an AI startup called Holocene AI!

We're new, busy, and still setting things up. But if you have any business inquiries, want a job, or just want some consultation, feel free to shoot me an email. We have expertise in RAG applications and llama/embeddings model finetuning, and absolutely *none* of the nonsense of scammy AI startups.

Contact me at: [email protected]

I also set up a Ko-Fi! I want to run some (personal) training/LASERing as well, at 100K context or so. If you'd like to buy me 10 minutes on an A100 (or 5 seconds on an MI300X), I'd appreciate it: https://ko-fi.com/alphaatlas

added_tokens.json
ADDED
@@ -0,0 +1,4 @@
{
  "</s>": 64001,
  "<s>": 64000
}
config.json
ADDED
@@ -0,0 +1,37 @@
{
  "_name_or_path": "/models/models/Yi-34B-200K-RPMerge",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 7168,
  "initializer_range": 0.02,
  "intermediate_size": 20480,
  "max_position_embeddings": 200000,
  "model_type": "llama",
  "num_attention_heads": 56,
  "num_hidden_layers": 60,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "modules_to_not_convert": null,
    "quant_method": "awq",
    "version": "gemm",
    "zero_point": true
  },
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 5000000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.37.1",
  "use_cache": true,
  "vocab_size": 64002
}
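The `quantization_config` above marks this checkpoint as a 4-bit AWQ quant. As a hedged loading sketch (assuming `transformers` with `autoawq` installed and the same hypothetical local path as earlier, and keeping the README's `max_position_embeddings` warning in mind):

```python
# Sketch: load the AWQ-quantized checkpoint with transformers.
# Assumes autoawq is installed and "Yi-34B-200K-RPMerge" is a hypothetical local path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Yi-34B-200K-RPMerge"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",  # spread layers across available GPUs
)

prompt = "SYSTEM: You are a storyteller.\nUSER: Begin a short tale.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```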
generation_config.json
ADDED
@@ -0,0 +1,8 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.37.1"
}
model-00001-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:350292c70b4e994c6300dfa1b8d654fa77705b0e8d1db6f4948ef7808f80f66e
size 4975402760
model-00002-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ae8f4358745bf2d5885aa2af60305d5a7330ce5c74844d3ca53ea44c0837d645
size 4988429248
model-00003-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c75c51bd076050b2f57d09e82deb3a19c6db1b2550b3d74e5467467840b2f33
size 4927413864
model-00004-of-00004.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1d27ae906d76ef0c8010acc27fcc916eb60464ce8c54eb9a4476cf30d68e96aa
size 4334706272
model.safetensors.index.json
ADDED
The diff for this file is too large to render.
See raw diff
special_tokens_map.json
ADDED
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
size 1033105
tokenizer_config.json
ADDED
@@ -0,0 +1,59 @@
{
  "add_bos_token": false,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<|startoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "64000": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "64001": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "bos_token": "<|startoftext|>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "legacy": false,
  "model_max_length": 200000,
  "pad_token": "<unk>",
  "padding_side": "right",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "truncation_side": "right",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
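As a final, optional sanity check (again assuming the hypothetical local path used above), a short sketch confirming the tokenizer resolves the special tokens declared in these configs:

```python
# Sketch: verify the tokenizer's special tokens match the configs above.
# "Yi-34B-200K-RPMerge" is a hypothetical local path.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Yi-34B-200K-RPMerge")

print(tokenizer.bos_token)  # expected: <|startoftext|>
print(tokenizer.eos_token)  # expected: <|endoftext|>
print(tokenizer.pad_token)  # expected: <unk>
print(len(tokenizer))       # expected: 64002, matching config.json's vocab_size
```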