# RolePlayLake-7B
RolePlayLake-7B is a slerp merge of the following models:
- [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
- [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)

In my testing so far, RolePlayLake is better than Silicon-Maid at RP and more uncensored than WestLake. My aim is to merge only uncensored models that are biased toward chat rather than instruct.
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Silicon-Maid-7B
        layer_range: [0, 32]
      - model: senseable/WestLake-7B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: senseable/WestLake-7B-v2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
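For intuition: slerp interpolates between corresponding weight tensors of the two models along a spherical arc rather than a straight line, and the `t` lists above define a per-layer interpolation schedule (one curve for self-attention tensors, an inverted one for MLP tensors, 0.5 for everything else). Below is a minimal NumPy sketch of the slerp operation itself; it is illustrative only, not mergekit's exact implementation, and the linear-interpolation fallback for near-parallel tensors is an assumption.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    # Angle between the tensors, treated as flat vectors
    cos_omega = np.dot(a.ravel(), b.ravel()) / (
        np.linalg.norm(a) * np.linalg.norm(b) + eps
    )
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.sin(omega) < eps:
        # Nearly (anti)parallel tensors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    # Standard slerp: the result traces a great-circle arc from a (t=0) to b (t=1)
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Toy usage: blend two stand-in weight matrices halfway
a = np.random.randn(8, 8)
b = np.random.randn(8, 8)
merged = slerp(0.5, a, b)
```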
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "fhai50032/RolePlayLake-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Text-generation pipeline, placed automatically across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
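If the full-precision model does not fit in your GPU memory, one option is 4-bit loading with bitsandbytes. A minimal sketch, assuming `bitsandbytes` is installed; the quantization settings are illustrative, not a tested recommendation for this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "fhai50032/RolePlayLake-7B"

# Illustrative 4-bit quantization settings (assumption, not tuned for this merge)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```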
## Why I Merged WestLake and Silicon Maid

I merged WestLake and Silicon Maid for a unique blend:

- EQ-Bench dominance: WestLake's 79.75 EQ-Bench score (possibly contaminated).
- Charm and role-play: Silicon-Maid's explicit charm and WestLake's role-play prowess.
- Config synergy: supports many prompt formats out of the box, and the two configurations complement each other well (see the sketch after this list).

Result: RolePlayLake-7B, a linguistic fusion with EQ-Bench supremacy and captivating role-play potential.
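To illustrate the prompt-format claim, here is a hedged sketch that feeds the model an Alpaca-style prompt instead of the tokenizer's chat template. Alpaca is a common format for RP-oriented Mistral merges, but whether it is the best format for this particular merge is an assumption; the snippet reuses the `pipeline` object from the Usage section above.

```python
# Hypothetical Alpaca-style prompt; whether this merge prefers this format is an assumption
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Stay in character as a grumpy tavern keeper and greet a new customer.\n\n"
    "### Response:\n"
)

outputs = pipeline(alpaca_prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```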
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 72.54 |
| AI2 Reasoning Challenge (25-Shot) | 70.56 |
| HellaSwag (10-Shot)               | 87.42 |
| MMLU (5-Shot)                     | 64.55 |
| TruthfulQA (0-shot)               | 64.38 |
| Winogrande (5-shot)               | 83.27 |
| GSM8k (5-shot)                    | 65.05 |