
Meta-Llama-3-225B-Instruct

Meta-Llama-3-225B-Instruct is a self-merge of mlabonne/Meta-Llama-3-120B-Instruct, which is itself a self-merge of meta-llama/Meta-Llama-3-70B-Instruct.

It was inspired by other large merges built the same way.

I don't recommend using it as it seems to break quite easily (but feel free to prove me wrong).

🧩 Configuration

slices:
- sources:
  - layer_range: [0, 20]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [10, 30]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [20, 40]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [30, 50]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [40, 60]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [50, 70]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [60, 80]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [70, 90]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [80, 100]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [90, 110]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [100, 120]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [110, 130]
    model: mlabonne/Meta-Llama-3-120B-Instruct
- sources:
  - layer_range: [120, 140]
    model: mlabonne/Meta-Llama-3-120B-Instruct
merge_method: passthrough
dtype: float16
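
For reference, this passthrough merge stacks 13 overlapping 20-layer slices (each shifted by 10 layers) of the 140-layer Meta-Llama-3-120B-Instruct, producing a 260-layer model of roughly 225B parameters. A merge like this can be reproduced with mergekit; the following is a minimal sketch, assuming the config above is saved as config.yaml (the output directory name is illustrative):

!pip install -qU mergekit
!mergekit-yaml config.yaml ./Meta-Llama-3-225B-Instruct --copy-tokenizer --lazy-unpickle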

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Meta-Llama-3-225B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Shard the model across available devices in float16
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens from the formatted prompt
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
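
At float16, the 225B parameters alone occupy roughly 450 GB, so running this model realistically requires many GPUs or quantization. Below is a minimal sketch of 4-bit loading via bitsandbytes; the quantization settings are illustrative assumptions, not part of the original card:

!pip install -qU transformers accelerate bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "mlabonne/Meta-Llama-3-225B-Instruct"

# 4-bit NF4 quantization shrinks the weight footprint to roughly 115 GB
# (assumption: illustrative settings, not tested on this merge)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Same prompt and sampling settings as the pipeline example above
messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))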