---
base_model: mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
library_name: transformers
license: llama3.1
tags:
  - abliterated
  - uncensored
  - mlx
model-index:
  - name: Meta-Llama-3.1-8B-Instruct-abliterated
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 73.29
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 27.13
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 6.42
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 0.89
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.21
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 27.81
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
          name: Open LLM Leaderboard
---

# 0xBreath/Meta-Llama-3.1-8B-Instruct-abliterated-q8-mlx

The model [0xBreath/Meta-Llama-3.1-8B-Instruct-abliterated-q8-mlx](https://huggingface.co/0xBreath/Meta-Llama-3.1-8B-Instruct-abliterated-q8-mlx) was converted to MLX format from [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) using mlx-lm version **0.19.0**.
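
For reference, a conversion like this one can be reproduced with mlx-lm's `convert()` API. This is a minimal sketch, assuming the argument names exposed by mlx-lm around version 0.19 (`quantize`, `q_bits`); the local output directory is illustrative:

```python
# Sketch: reproduce an 8-bit MLX conversion with mlx-lm's convert() API.
# Assumes mlx-lm ~0.19; argument names may differ in other versions.
from mlx_lm import convert

convert(
    "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated",          # source Hugging Face repo
    mlx_path="Meta-Llama-3.1-8B-Instruct-abliterated-q8-mlx",   # illustrative output dir
    quantize=True,  # write quantized weights
    q_bits=8,       # 8 bits per weight, matching the "q8" suffix in the repo name
)
```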

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download and load the quantized model and its tokenizer from the Hub.
model, tokenizer = load("0xBreath/Meta-Llama-3.1-8B-Instruct-abliterated-q8-mlx")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is available,
# so the instruct model sees the message format it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
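
For interactive use, mlx-lm also provides a streaming generator so tokens can be printed as they are produced. A minimal sketch, assuming `stream_generate` is exported by your mlx-lm version (around 0.19 it yields text segments; newer releases yield response objects with a `.text` attribute):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("0xBreath/Meta-Llama-3.1-8B-Instruct-abliterated-q8-mlx")

# Print tokens as they arrive instead of waiting for the full completion.
# Note: in mlx-lm ~0.19 stream_generate yields strings; in newer versions
# it yields response objects, in which case print chunk.text instead.
for chunk in stream_generate(model, tokenizer, prompt="hello", max_tokens=256):
    print(chunk, end="", flush=True)
```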