Konstanta Series RP Models (successful variants)
Successful variants of the experimental Konstanta merge series. They are pretty good!
Konstanta-7B is a merge of the models listed in the configuration below, made using LazyMergekit.

This is a test merge intended to improve Kunoichi by combining it with the new Beagle model and PiVoT Evil, both of which show good performance. Even though the model's name is Russian, the model is not really capable of using the Russian language properly, as that was not the main goal of the merge.
```yaml
merge_method: dare_ties
dtype: bfloat16
parameters:
  int8_mask: true
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
models:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
  - model: maywell/PiVoT-0.1-Evil-a
    parameters:
      density: 0.65
      weight: 0.15
  - model: mlabonne/NeuralOmniBeagle-7B-v2
    parameters:
      density: 0.85
      weight: 0.45
```
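With `dare_ties`, `density` sets the fraction of each donor model's delta weights that survive random pruning, and `weight` sets how strongly the surviving, rescaled deltas are added back onto the base model. To reproduce a merge like this outside the LazyMergekit notebook, the same configuration can be passed to mergekit's command-line tool. The snippet below is a minimal sketch, assuming mergekit is installed (`pip install mergekit`); the config filename and output directory are illustrative.

```python
# Minimal reproduction sketch; assumes mergekit is installed
# ("config.yaml" and "./Konstanta-7B-merge" are illustrative paths).
import subprocess

merge_config = """
merge_method: dare_ties
dtype: bfloat16
parameters:
  int8_mask: true
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
models:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
  - model: maywell/PiVoT-0.1-Evil-a
    parameters:
      density: 0.65
      weight: 0.15
  - model: mlabonne/NeuralOmniBeagle-7B-v2
    parameters:
      density: 0.85
      weight: 0.45
"""

# Write the configuration shown above to disk.
with open("config.yaml", "w", encoding="utf-8") as f:
    f.write(merge_config)

# mergekit-yaml is mergekit's CLI entry point; --copy-tokenizer copies the
# base tokenizer into the output directory alongside the merged weights.
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Konstanta-7B-merge", "--copy-tokenizer"],
    check=True,
)
```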
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Inv/Konstanta-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and place it automatically across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens and print the full generated text.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 73.54 |
| AI2 Reasoning Challenge (25-Shot) | 70.05 |
| HellaSwag (10-Shot) | 87.50 |
| MMLU (5-Shot) | 65.06 |
| TruthfulQA (0-shot) | 65.43 |
| Winogrande (5-shot) | 82.16 |
| GSM8k (5-shot) | 71.04 |
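The Avg. row is the arithmetic mean of the six benchmark scores, which is easy to double-check:

```python
# Sanity check: Avg. is the mean of the six benchmark scores listed above.
scores = [70.05, 87.50, 65.06, 65.43, 82.16, 71.04]
print(round(sum(scores) / len(scores), 2))  # 73.54
```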