brucethemoose committed
Commit 56cf885
1 Parent(s): 6c93619

Update README.md

Files changed (1)
  1. README.md +4 -53
README.md CHANGED
@@ -14,53 +14,11 @@ tags:
  - exllamav2
  - exl2
  ---
- # RPMerge
- A merge of several Yi 34B models with a singular goal: 40K+ context, instruct-enhanced storytelling.

- Disappointed with some quirks of my previous kitchen sink merges (like token/instruct formats from various models showing up when they shouldn't), I've gone 'back to the basics' and picked a few Vicuna-format only models:

- [DrNicefellow/ChatAllInOne-Yi-34B-200K-V1](https://huggingface.co/DrNicefellow/ChatAllInOne-Yi-34B-200K-V1) and [migtissera/Tess-34B-v1.5b](https://huggingface.co/migtissera/Tess-34B-v1.5b) both have excellent general instruction-following performance.
-
- [cgato/Thespis-34b-v0.7](https://huggingface.co/cgato/Thespis-34b-v0.7) is trained on the "Username: {Input} / BotName: {Response}" format, to emphasize it in the merge (but not force it). It also seems to work for multi-character stories.
-
- [Doctor-Shotgun/limarpv3-yi-llama-34b-lora](https://huggingface.co/Doctor-Shotgun/limarpv3-yi-llama-34b-lora) is trained on roleplaying data, but merged at a modest weight so as not to over-emphasize it. This is the only non-Vicuna model (it is Alpaca format), but it doesn't seem to interfere with the Vicuna format or adversely affect long-context perplexity.
-
- [adamo1139/yi-34b-200k-rawrr-dpo-2](https://huggingface.co/adamo1139/yi-34b-200k-rawrr-dpo-2) is the base for the limarp lora; it is base Yi gently finetuned to discourage refusals.
-
- [migtissera/Tess-M-Creative-v1.0](https://huggingface.co/migtissera/Tess-M-Creative-v1.0) and [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B) are both "undertrained" Yi models. I find they excel at raw completion performance (like long novel continuations) while still retaining some Vicuna instruct ability. This may be why some still prefer the original Tess 1.0/Capybara merge.
-
- I consider this a more "focused" merge and a possible base for a storytelling finetune/LASER in the future, once I bite the bullet and rent some A100s or an MI300. I will investigate other models (perhaps ChatML models?) for a more "factual assistant" focused merge, as well as a coding-focused merge if I can't find one to suit my needs.
-
-
- ## Prompt template: Orca-Vicuna
- ```
- SYSTEM: {system_message}
- USER: {prompt}
- ASSISTANT:
- ```
- Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/
-
- A very explicit system prompt like this also works well: https://old.reddit.com/r/LocalLLaMA/comments/1aiz6zu/roleplaying_system_prompts/koygiwa/
-
-
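As a rough illustration (not from the original card), building a single-turn prompt in that format looks like the sketch below; the helper name is hypothetical, and the trailing `ASSISTANT:` cue is left open for the model to complete:

```
def orca_vicuna_prompt(system_message: str, prompt: str) -> str:
    # Orca-Vicuna format from above; keep "ASSISTANT:" at the end with nothing after it.
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

print(orca_vicuna_prompt("You are a storyteller.", "Continue the scene."))
```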
- ## Running
-
- Chinese models with large tokenizer vocabularies like Yi need *careful* parameter tuning due to their huge logit sampling "tails." Yi in particular also runs relatively "hot" even at lower temperatures.
-
- I am a huge fan of Kalomaze's quadratic sampling (shown as "smoothing factor" where available), as described here: https://github.com/oobabooga/text-generation-webui/pull/5403
-
- Otherwise, I recommend a lower temperature with a MinP of 0.1 or higher, a little repetition penalty, mirostat with a low tau, and no other samplers. See the explanation here: https://github.com/ggerganov/llama.cpp/pull/3841
-
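To make that concrete, here are two illustrative starting presets. The parameter names follow text-generation-webui's sampler settings, and the exact values are my own rough guesses rather than tuned numbers from this card:

```
# Pick one preset or the other; don't stack quadratic sampling and mirostat.
quadratic_preset = {
    "temperature": 0.8,          # keep it modest; Yi runs "hot"
    "smoothing_factor": 0.2,     # Kalomaze's quadratic sampling
    "min_p": 0.05,
}

mirostat_preset = {
    "temperature": 0.8,
    "min_p": 0.1,                # 0.1 or higher
    "repetition_penalty": 1.05,  # a little repetition penalty
    "mirostat_mode": 2,
    "mirostat_tau": 2.0,         # a low tau
    "top_p": 1.0,                # leave other truncation samplers off
    "top_k": 0,
}
```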
- 24GB GPUs can efficiently run Yi-34B-200K models at **40K-90K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). 16GB GPUs with nothing else loaded can still run high context with aggressive quantization.
-
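For reference, loading an exl2 quant at long context with an 8-bit cache looks roughly like the sketch below. This is written from memory of exllamav2's example scripts, so treat the exact API, the placeholder path, and the context length as assumptions and check the exllamav2 repo for the current interface:

```
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer

config = ExLlamaV2Config()
config.model_dir = "/path/to/RPMerge-exl2"  # placeholder path to an exl2 quant
config.prepare()
config.max_seq_len = 40960                  # ~40K context; raise if VRAM allows

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # 8-bit KV cache to save VRAM
model.load_autosplit(cache)                    # fill available GPUs automatically
tokenizer = ExLlamaV2Tokenizer(config)
```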
- To load or train this model in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a value lower than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends that support flash attention + an 8-bit KV cache, like exllamav2, litellm, vllm or unsloth.
-
-
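If you do load it in transformers anyway, one way to apply that `max_position_embeddings` change without hand-editing config.json is to override it at load time. A minimal sketch, with an arbitrary 32K cap as an example value:

```
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "brucethemoose/Yi-34B-200K-RPMerge"  # or a local path

config = AutoConfig.from_pretrained(model_id)
config.max_position_embeddings = 32768  # well below 200,000, per the warning above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="flash_attention_2",  # only if flash-attn is installed
)
```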
- ## Testing Notes
-
- See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8#testing-notes
-
- I have tested this merge with novel-style continuation (but not much chat-style roleplay), some assistant-style responses, and long-context analysis. I haven't seen any refusals so far.
 
  ## Merge Details
  ### Merge Method
@@ -103,7 +61,7 @@ models:
  parameters:
  weight: 0.19
  density: 0.55
- model: adamo1139/yi-34b-200k-rawrr-dpo-2+Doctor-Shotgun/limarpv3-yi-llama-34b-lora
  parameters:
  weight: 0.19
  density: 0.48
@@ -120,10 +78,3 @@ parameters:
  int8_mask: true
  dtype: bfloat16
  ```
-
-
- ## Self Promotion
-
- I'm part of an AI startup called Holocene AI!
-
- We're new, busy, and still setting things up. But if you have any business inquiries, or just want some consultation, feel free to shoot me a DM. We have expertise in RAG applications and llama/embeddings model finetuning, and absolutely *none* of the nonsense of scammy AI startups.
 
  - exllamav2
  - exl2
  ---
+ # RPmerge
+
+ 2.67
+
+ See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge
 
  ## Merge Details
  ### Merge Method
 
  parameters:
  weight: 0.19
  density: 0.55
+ - model: /home/alpha/Models/Raw/admo_limarp
  parameters:
  weight: 0.19
  density: 0.48
 
  int8_mask: true
  dtype: bfloat16
  ```