
# L3.1-70b-MeowMixV2

Meow.

L3.1-70b-MeowMixV2 is a merge of the following models, made with LazyMergekit running on Runpod:

* [migtissera/Tess-3-Llama-3.1-70B](https://huggingface.co/migtissera/Tess-3-Llama-3.1-70B)
* [HODACHI/Llama-3.1-70B-EZO-1.1-it](https://huggingface.co/HODACHI/Llama-3.1-70B-EZO-1.1-it)
* [shenzhi-wang/Llama3.1-70B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3.1-70B-Chinese-Chat)
* [Saxo/Linkbricks-Horizon-AI-Korean-llama3.1-sft-dpo-70B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-llama3.1-sft-dpo-70B)

## Yap / Chat Format

Llama 3 Instruct.
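
For reference, this is the prompt layout the Llama 3 Instruct chat template produces (the Usage example below builds it automatically with `apply_chat_template`):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```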

## 🧩 Configuration


```yaml
models:
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      density: 0.7
      weight:
        - value: 0.75
  - model: HODACHI/Llama-3.1-70B-EZO-1.1-it
    parameters:
      density: 0.2
      weight:
        - value: [1, 0.75, 0.5, 0.25, 0, 0, 0, 0, 0.0, 0.5, 1]
  - model: shenzhi-wang/Llama3.1-70B-Chinese-Chat
    parameters:
      density: 0.2
      weight:
        - value: [1, 0.75, 0.5, 0.25, 0, 0, 0, 0, 0.0, 0.5, 1]
  - model: Saxo/Linkbricks-Horizon-AI-Korean-llama3.1-sft-dpo-70B
    parameters:
      density: 0.2
      weight:
        - value: [1, 0.75, 0.5, 0.25, 0, 0, 0, 0, 0.0, 0.5, 1]

merge_method: della_linear
base_model: migtissera/Tess-3-Llama-3.1-70B
parameters:
  normalize: true
dtype: bfloat16
```
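
To reproduce the merge yourself, something along these lines should work (a minimal sketch, not the exact LazyMergekit/Runpod invocation; it assumes the YAML above is saved as `config.yaml` and that you have enough disk space and memory for four 70B checkpoints):

```python
# Minimal sketch: run the della_linear merge above with mergekit.
# Assumes the configuration is saved as config.yaml in the working directory.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./L3.1-70b-MeowMixV2 --cuda --copy-tokenizer
```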

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "KaraKaraWitch/L3.1-70b-MeowMixV2"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
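
At float16/bfloat16 precision a 70B-parameter model needs on the order of 140 GB of GPU memory, so the pipeline above assumes a multi-GPU setup. If that is not available, one option is a 4-bit load via bitsandbytes (a sketch, assuming the `bitsandbytes` package is installed; `model_4bit` is just an illustrative name):

```python
# Optional: 4-bit quantized load for smaller GPUs (requires the bitsandbytes package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "KaraKaraWitch/L3.1-70b-MeowMixV2"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```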