# Marcoroni-7b-DPO-Merge
Marcoroni-7b-DPO-Merge is a merge of the following models using [mergekit](https://github.com/cg123/mergekit), inspired by Maxime Labonne's work:

* [madatnlp/marcoroni-7b-v3-safetensor](https://huggingface.co/madatnlp/marcoroni-7b-v3-safetensor) (base model)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
## 🧩 Configuration
```yaml
models:
  - model: madatnlp/marcoroni-7b-v3-safetensor
    # no parameters necessary for base model
  - model: fblgit/UNA-TheBeagle-7b-v1
    parameters:
      density: 0.3
      weight: 0.5
  - model: udkai/Turdus
    parameters:
      density: 0.7
      weight: 0.3
merge_method: ties
base_model: madatnlp/marcoroni-7b-v3-safetensor
parameters:
  normalize: true
dtype: float16
```
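For context on the `ties` settings above: TIES-Merging trims each model's task vector (its delta from the base) down to the top `density` fraction of entries by magnitude, elects a per-parameter majority sign, and combines only the entries that agree with it, with `normalize: true` dividing by the weights that actually contribute. The snippet below is a minimal single-tensor sketch of that idea, not mergekit's actual implementation; all function names are illustrative.

```python
import torch

def trim(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep only the top `density` fraction of entries by magnitude; zero the rest."""
    k = max(int(density * delta.numel()), 1)
    threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
    return torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))

def ties_merge_tensor(base, finetuned, densities, weights):
    """Illustrative TIES merge of one parameter tensor from several fine-tunes."""
    # Trim: sparsify each task vector (fine-tuned minus base), scaled by its weight.
    deltas = [w * trim(ft - base, d) for ft, d, w in zip(finetuned, densities, weights)]
    # Elect sign: per-entry majority sign of the weighted deltas.
    sign = torch.stack(deltas).sum(dim=0).sign()
    # Merge: keep only entries whose sign agrees with the elected one.
    agree = [torch.where(d.sign() == sign, d, torch.zeros_like(d)) for d in deltas]
    merged = torch.stack(agree).sum(dim=0)
    # normalize: true -> divide by the total weight actually contributing per entry.
    total_w = torch.stack([(d != 0).float() * w for d, w in zip(agree, weights)]).sum(dim=0)
    merged = merged / total_w.clamp(min=1e-12)
    return base + merged
```

With the config above, UNA-TheBeagle contributes a sparse (density 0.3) but heavily weighted (0.5) delta, while Turdus contributes a denser (0.7), lighter (0.3) one.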
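To reproduce the merge, the YAML can be fed to mergekit. Below is a sketch using mergekit's Python entry point (`mergekit.merge.run_merge`, per its documentation around the time of this card); the config and output paths are assumptions, not part of the original card.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumed paths: the YAML above saved locally, plus an output directory.
CONFIG_YML = "./config.yaml"
OUTPUT_PATH = "./Marcoroni-7b-DPO-Merge"

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```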
## 💻 Example Python Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "nfaheem/Marcoroni-7b-DPO-Merge"

model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",
    revision="main",
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
# Plain-text prompt; adjust to the model's expected chat format if needed.
prompt_template = f'''{system_message}

{prompt}
'''

print("\n\n*** Generate:")

# Place inputs on the same device as the model (works with device_map="auto").
input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to(model.device)
output = model.generate(
    inputs=input_ids,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512,
)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)
print(pipe(prompt_template)[0]["generated_text"])
```
## 📈 Summary Eval
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|
| 74.9 | 73.04 | 88.8 | 64.24 | 70.47 | 85.24 | 67.63 |
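These figures follow the Open LLM Leaderboard task setup (25-shot ARC, 10-shot HellaSwag, 5-shot MMLU, 0-shot TruthfulQA, 5-shot Winogrande, 5-shot GSM8K). As a sketch, a single benchmark could be re-run locally with EleutherAI's lm-evaluation-harness (v0.4+ Python API); the batch size here is an arbitrary choice.

```python
import lm_eval

# Sketch: re-run one leaderboard task locally (25-shot ARC-Challenge).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nfaheem/Marcoroni-7b-DPO-Merge,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```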
## 🏆 Hugging Face Leaderboard
As of 01/15/2024, it ranked #1 on the Hugging Face Open LLM Leaderboard among models of up to roughly 13B parameters.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|---|
| nfaheem/Marcoroni-7b-DPO-Merge | 74.9 | 73.04 | 88.8 | 64.24 | 70.47 | 85.24 | 67.63 |
| mlabonne/Beagle14-7b | 74.76 | 72.95 | 87.95 | 64.7 | 68.38 | 82.64 | 71.42 |
| udkai/Turdus | 74.66 | 73.38 | 88.56 | 64.52 | 67.11 | 86.66 | 67.7 |
| CultriX/MergeTrix-7B | 74.33 | 72.24 | 87.84 | 64.88 | 66.27 | 83.5 | 71.19 |
| fblgit/UNA-TheBeagle-7b-v1 | 73.87 | 73.04 | 88 | 63.48 | 69.85 | 82.16 | 66.72 |