# Llama3-8B-SuperNova-Spectrum-dare_ties

Llama3-8B-SuperNova-Spectrum-dare_ties is a `dare_ties` merge of the following models using LazyMergekit:

- yuvraj17/Llama-3-8B-spectrum-25
- ruggsea/Llama3-stanford-encyclopedia-philosophy-QA
- arcee-ai/Llama-3.1-SuperNova-Lite
## DARE_TIES Merging

### TIES Merging
TIES Merging, introduced by Yadav et al. (2023), is a method for merging multiple specialized models into one general-purpose model. It solves two key challenges:
- Redundancy Removal: Identifies and eliminates overlapping or unnecessary information between models, making the final model more efficient.
- Conflict Resolution: Reconciles differences between models by creating a unified sign vector that represents the most dominant direction of change across all models.
TIES stands for TRIM, ELECT SIGN & MERGE (TIES-MERGING).
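To make the three steps concrete, here is a minimal NumPy sketch of TIES-Merging over flattened weight vectors. It is illustrative only (the function name and flat-vector framing are my own); mergekit's real implementation works tensor-by-tensor and also applies per-model weights.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Illustrative TIES-Merging over flat weight vectors.

    base:      1-D array of base-model weights
    finetuned: list of 1-D arrays, one per fine-tuned model
    density:   fraction of each task vector kept after trimming
    """
    deltas = [ft - base for ft in finetuned]  # task vectors

    # 1. TRIM: zero out all but the top-`density` fraction of entries by magnitude
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.size))
        thresh = np.partition(np.abs(d), -k)[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)

    # 2. ELECT SIGN: per parameter, the sign with the larger total magnitude wins
    elected = np.sign(stacked.sum(axis=0))

    # 3. MERGE: average only the entries whose sign agrees with the elected sign
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    count = np.maximum(agree.sum(axis=0), 1)
    merged_delta = stacked.sum(axis=0, where=agree) / count
    return base + merged_delta
```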
### DARE Merging
Introduced by Yu et al. (2023), DARE uses an approach similar to TIES with two main differences:
- Weight Pruning: Randomly resets some fine-tuned weights to their original values, reducing model complexity.
- Weight Scaling: Adjusts the remaining weights by scaling and combining them with the base model's weights to maintain consistent performance.
DARE stands for DROP AND RESCALE.
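A matching sketch of DARE's drop-and-rescale step. Here `density` is the probability of keeping each delta entry (so the drop rate is p = 1 − density), which, as I understand it, is also how the `density` parameter in the configuration below behaves:

```python
import numpy as np

def dare_prune(base, finetuned, density=0.5, seed=0):
    """Illustrative DARE: randomly DROP task-vector entries, RESCALE the rest."""
    rng = np.random.default_rng(seed)
    delta = finetuned - base                  # task vector
    keep = rng.random(delta.shape) < density  # Bernoulli keep mask, P(keep) = density
    # Rescale survivors by 1/density so the expected update stays unchanged
    return base + np.where(keep, delta / density, 0.0)
```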
Mergekit's implementation of DARE merging has two flavours: with the sign election step of TIES (`dare_ties`) or without (`dare_linear`). I have chosen `dare_ties` for this merge.
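Roughly, `dare_ties` composes the two sketches above: DARE's random drop replaces TIES's magnitude-based trim, and TIES's sign election and disjoint merge then follow. A hedged sketch reusing `dare_prune` and `ties_merge` from above (mergekit additionally applies the per-model `weight` scaling from the configuration):

```python
def dare_ties(base, finetuned, density=0.5):
    """Illustrative dare_ties: DARE-prune each model, then TIES sign-elect and merge."""
    pruned = [dare_prune(base, ft, density, seed=i) for i, ft in enumerate(finetuned)]
    # density=1.0 disables TIES's magnitude trim; DARE's random drop already sparsified
    return ties_merge(base, pruned, density=1.0)
```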
For more information, refer to Merge Large Language Models with MergeKit by Maxime Labonne.
For an in-depth treatment of model merging and its different types, I highly recommend this YouTube video by Julien Simon.
## 🧩 Configuration
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: yuvraj17/Llama-3-8B-spectrum-25
    parameters:
      density: 0.56
      weight: 0.12
  - model: ruggsea/Llama3-stanford-encyclopedia-philosophy-QA
    parameters:
      density: 0.56
      weight: 0.12
  - model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.58
      weight: 0.55
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
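To reproduce the merge, a notebook-style sketch of the mergekit invocation, assuming the YAML above is saved as `config.yaml` (the output path is arbitrary):

```python
# Install mergekit, then run the merge described by config.yaml
!pip install -qU mergekit
!mergekit-yaml config.yaml ./merged-model --copy-tokenizer
```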
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
> A large language model is a type of artificial intelligence (AI) model designed to understand and generate human language. It is trained on a massive corpus of text data, which it uses to learn patterns and relationships between words and concepts. Large language models are typically based on a deep learning approach called transformer architecture, which was introduced by the Google research paper "Attention Is All You Need" (2017). These models are designed to handle the complexity of natural language by capturing long-range dependencies and contextual relationships between words. Large language models can perform a variety of tasks, including:
>
> - Natural language processing (NLP): large language models can understand and generate text, and can be used for tasks such as text classification, sentiment analysis, and named entity recognition.
> - Text generation: large language models can generate human-like text, such as chatbots, language translation, and text summarization.
> - Question answering: large language models can answer questions based on the text they have been trained on.
> - Conversational AI: large language models can be used to create conversational agents that can understand and respond to user input.
## 🏆 Evaluation Scores

### Nous
| Model | AGIEval | TruthfulQA | Bigbench |
|---|---:|---:|---:|
| Llama3-8B-SuperNova-Spectrum-dare_ties | 38.32 | 57.15 | 43.91 |
### AGIEval

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|---|---:|
| agieval_aqua_rat | 0 | acc | 20.47 | ± | 2.54 |
| | | acc_norm | 18.50 | ± | 2.44 |
| agieval_logiqa_en | 0 | acc | 35.94 | ± | 1.88 |
| | | acc_norm | 35.64 | ± | 1.88 |
| agieval_lsat_ar | 0 | acc | 21.74 | ± | 2.73 |
| | | acc_norm | 20.00 | ± | 2.64 |
| agieval_lsat_lr | 0 | acc | 41.37 | ± | 2.18 |
| | | acc_norm | 40.98 | ± | 2.18 |
| agieval_lsat_rc | 0 | acc | 59.11 | ± | 3.00 |
| | | acc_norm | 56.13 | ± | 3.03 |
| agieval_sat_en | 0 | acc | 63.59 | ± | 3.36 |
| | | acc_norm | 60.19 | ± | 3.42 |
| agieval_sat_en_without_passage | 0 | acc | 40.29 | ± | 3.43 |
| | | acc_norm | 37.38 | ± | 3.38 |
| agieval_sat_math | 0 | acc | 38.64 | ± | 3.29 |
| | | acc_norm | 37.73 | ± | 3.28 |

Average: 38.32%
### TruthfulQA

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|---|---:|
| truthfulqa_mc | 1 | mc1 | 38.43 | ± | 1.7 |
| | | mc2 | 57.15 | ± | 1.5 |

Average: 57.15%
### Bigbench

| Task | Version | Metric | Value | | Stderr |
|---|---:|---|---:|---|---:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 58.42 | ± | 3.59 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 70.73 | ± | 2.37 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 30.23 | ± | 2.86 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 47.35 | ± | 2.64 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 29.00 | ± | 2.03 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 21.00 | ± | 1.54 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 51.33 | ± | 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 33.20 | ± | 2.11 |
| bigbench_navigate | 0 | multiple_choice_grade | 55.40 | ± | 1.57 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 66.35 | ± | 1.06 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 45.76 | ± | 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 28.26 | ± | 1.43 |
| bigbench_snarks | 0 | multiple_choice_grade | 62.43 | ± | 3.61 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.30 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 48.00 | ± | 1.58 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 23.60 | ± | 1.20 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.66 | ± | 0.91 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 51.33 | ± | 2.89 |

Average: 43.91%
## Special Thanks & References

- Maxime Labonne for the easy-to-use Colab notebook Merging LLMs with MergeKit, his blog, and the LLM-AutoEval notebook
- The authors of Mergekit
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---:|
| Avg. | 19.00 |
| IFEval (0-shot) | 40.13 |
| BBH (3-shot) | 23.49 |
| MATH Lvl 5 (4-shot) | 7.40 |
| GPQA (0-shot) | 3.36 |
| MuSR (0-shot) | 11.00 |
| MMLU-PRO (5-shot) | 28.60 |