WARNING: Not for use. Known bug: the model emits stray "INSTINST" tokens in its responses.
This model was merged and trained thanks to the knowledge I gained from reading Maxime Labonne's course. Special thanks to him!
NeuTrixOmniBe-DPO
NeuTrixOmniBe-DPO is a merge of the following models using LazyMergekit:
- CultriX/NeuralTrix-7B-dpo
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
🧩 Configuration
MODEL_NAME = "NeuTrixOmniBe-DPO"
yaml_config = """
slices:
  - sources:
      - model: CultriX/NeuralTrix-7B-dpo
        layer_range: [0, 32]
      - model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: CultriX/NeuralTrix-7B-dpo
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
"""
It was then trained with DPO using:
- Intel/orca_dpo_pairs
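The card does not publish the training script, but a minimal sketch of this DPO stage, assuming trl's DPOTrainer and illustrative hyperparameters (not the author's actual recipe), might look like:

```python
# Hedged sketch of the DPO stage, assuming trl's DPOTrainer.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "path/to/merged-model"  # the slerp merge above would be the real starting point
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs has system/question/chosen/rejected columns;
# DPOTrainer expects prompt/chosen/rejected, so rename accordingly.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt")
dataset = dataset.remove_columns("system")

args = DPOConfig(
    output_dir="dpo-out",
    per_device_train_batch_size=2,  # assumed value
    num_train_epochs=1,             # assumed value
    beta=0.1,                       # common DPO beta, assumed
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # "tokenizer=" in older trl releases
)
trainer.train()
```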
💻 Usage
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuTrixOmniBe-DPO"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline and sample a response.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=128, do_sample=True, temperature=0.5, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
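Given the INSTINST bug noted at the top of the card, a defensive caller might strip the leaked marker from the output. This is a workaround sketch that masks the symptom, not a fix:

```python
# Workaround sketch for the INSTINST bug noted at the top of the card:
# strip the leaked marker from generated text before displaying it.
def strip_instinst(text: str) -> str:
    return text.replace("INSTINST", "").strip()

print(strip_instinst(outputs[0]["generated_text"]))
```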
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 76.17 |
| AI2 Reasoning Challenge (25-Shot) | 72.78 |
| HellaSwag (10-Shot) | 89.03 |
| MMLU (5-Shot) | 64.28 |
| TruthfulQA (0-shot) | 77.21 |
| Winogrande (5-shot) | 85.16 |
| GSM8k (5-shot) | 68.54 |