
🧪 YamshadowExperiment28-7B


🎉 YamshadowExperiment28-7B is currently the best-performing 7B model on the Open LLM Leaderboard (08 Apr 24). Use it with caution: such unusually high scores are likely a sign of benchmark overfitting.

YamshadowExperiment28-7B is an automated merge created by Maxime Labonne using the configuration shown in the 🧩 Configuration section below.

πŸ” Applications

This model uses an 8k context window. I recommend using it with the Alpaca chat template (it works perfectly with LM Studio).

The model can sometimes break and output a lot of "INST" tokens. In my experience, its excellent results on the Open LLM Leaderboard are probably a sign of overfitting.
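
For reference, here is a minimal sketch of the standard Alpaca prompt format in Python. The preamble sentence is the common Alpaca default, not something this card specifies; adjust it to match your client's template:

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is a large language model?\n\n"
    "### Response:\n"
)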

⚡ Quantized models
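
If you need a smaller memory footprint, here is a minimal sketch of on-the-fly 4-bit loading with bitsandbytes. This is a generic transformers recipe, not an official quantized release, and it assumes the bitsandbytes and accelerate packages are installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "automerger/YamshadowExperiment28-7B"

# 4-bit NF4 quantization, with computation done in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across the available devices
)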

πŸ† Evaluation

Open LLM Leaderboard

As noted above, YamshadowExperiment28-7B tops the 7B category of the Open LLM Leaderboard (08 Apr 24).


EQ-Bench

Thanks to Samuel J. Paech, who kindly ran the evaluation.


Nous

Evaluation performed using LLM AutoEval. See the entire leaderboard here.


🌳 Model Family Tree


🧩 Configuration

# SLERP merge of the two source models across all 32 layers.
slices:
  - sources:
      - model: automerger/YamShadow-7B
        layer_range: [0, 32]
      - model: yam-peleg/Experiment28-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: automerger/YamShadow-7B
parameters:
  # t is the interpolation factor: 0 keeps the base model's weights,
  # 1 keeps the other model's weights.
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]  # gradient over layer depth for attention weights
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]  # inverse gradient for MLP weights
    - value: 0.5                    # default for all remaining tensors
dtype: bfloat16
random_seed: 0
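
To reproduce the merge, here is a minimal sketch using mergekit's Python API, assuming mergekit is installed (pip install mergekit), the YAML above is saved as config.yaml, and the option names match your mergekit version:

import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the SLERP configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged weights to a local directory.
run_merge(
    merge_config,
    out_path="./YamshadowExperiment28-7B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
    ),
)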

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/YamshadowExperiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the tokenizer's built-in chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and let accelerate place it on the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])