---
language:
  - en
license: agpl-3.0
tags:
  - chat
base_model:
  - nvidia/Mistral-NeMo-Minitron-8B-Base
datasets:
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
  - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
  - lodrick-the-lafted/kalo-opus-instruct-3k-filtered
  - anthracite-org/nopm_claude_writing_fixed
  - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  - anthracite-org/kalo_opus_misc_240827
  - anthracite-org/kalo_misc_part2
pipeline_tag: text-generation
model-index:
  - name: Tor-8B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 23.82
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Tor-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 31.74
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Tor-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 5.44
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Tor-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 9.84
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Tor-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 8.82
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Tor-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 30.33
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Tor-8B
          name: Open LLM Leaderboard
---

# Tor-8B

An earlier checkpoint of Darkens-8B trained with the same configuration, which I felt was different enough from its 4-epoch cousin to release. Finetuned on top of the pruned/distilled NeMo 8B base released by Nvidia, this model aims for generally good prose and writing while not falling into Claude-isms.

## Quants

GGUF: https://huggingface.co/Delta-Vector/Tor-8B-GGUF

EXL2: https://huggingface.co/Delta-Vector/Tor-8B-EXL2
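
A quick sketch of running the GGUF quant with llama-cpp-python. The quant filename below is an assumption; check the GGUF repo for the actual file names before downloading.

```python
# Minimal sketch: download and run a GGUF quant with llama-cpp-python.
# The filename is a guess -- check Delta-Vector/Tor-8B-GGUF for real names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Delta-Vector/Tor-8B-GGUF",
    filename="Tor-8B-Q4_K_M.gguf",  # hypothetical quant filename
)

llm = Llama(model_path=model_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "system prompt"},
        {"role": "user", "content": "Hi there!"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```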

## Prompting

The model has been instruct-tuned with ChatML formatting. A typical input looks like this:

"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""

## System Prompting

I would highly recommend using Sao10k's Euryale system prompt, but the "Roleplay Simple" system prompt provided within SillyTavern will work as well.

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
```
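
The `{{char}}` and `{{user}}` placeholders are macros that SillyTavern fills in automatically. If you call the model directly instead, substitute them yourself before sending the system prompt; a minimal sketch (the persona names are hypothetical):

```python
# Minimal sketch: fill SillyTavern-style {{char}}/{{user}} macros before
# sending the system prompt directly to the model. Names are hypothetical.
def fill_macros(template: str, char: str, user: str) -> str:
    return template.replace("{{char}}", char).replace("{{user}}", user)

template = (
    "Currently, your role is {{char}}, described in detail below. "
    "As {{char}}, continue the narrative exchange with {{user}}."
)  # in practice, use the full prompt above

system_prompt = fill_macros(template, char="Aria", user="Traveler")
print(system_prompt)
```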

## Axolotl config


Axolotl version: 0.4.1

```yaml
base_model: Dans-DiscountModels/Mistral-NeMo-Minitron-8B-Base-ChatML
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
#liger_cross_entropy: true
liger_fused_linear_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: PRIVATE CLAUDE LOG FILTER
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
    conversation: chatml
chat_template: chatml
shuffle_merged_datasets: false
default_system_message: "You are a helpful assistant that responds to the user."
dataset_prepared_path: /workspace/data/8b-nemo-fft-data
val_set_size: 0.0
output_dir: /workspace/data/8b-nemo-fft-out

sequence_len: 16384
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: 8b-nemoprune-fft
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint: /workspace/workspace/thing 
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.001
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
```
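
Every dataset entry above uses `type: sharegpt` with `conversation: chatml`, so Axolotl expects rows in the ShareGPT schema and renders them into ChatML at tokenization time. A sketch of the expected row shape (the conversation text is illustrative, not taken from the actual datasets):

```python
# Illustrative ShareGPT-format row, the shape consumed by axolotl's
# `sharegpt` dataset type. The conversation contents here are made up.
example_row = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant that responds to the user."},
        {"from": "human", "value": "Write a short scene set in a lighthouse."},
        {"from": "gpt", "value": "The lamp swept its slow circle over the water..."},
    ]
}
```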


## Credits

Thank you to Lucy Knada, Kalomaze, Kubernetes Bad, and the rest of Anthracite (but not Alpin).

## Training

The training was done for 4 epochs (this model is the 2-epoch checkpoint). I used 10 A40 GPUs, graciously provided by Kalomaze, for the full-parameter fine-tuning of the model.
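
For reference, those settings imply an effective global batch size of 40 packed sequences, assuming all 10 GPUs participate in plain data parallelism:

```python
# Effective global batch size implied by the config above, assuming
# data parallelism across all 10 A40s.
micro_batch_size = 2             # from the axolotl config
gradient_accumulation_steps = 2  # from the axolotl config
num_gpus = 10

print(micro_batch_size * gradient_accumulation_steps * num_gpus)  # 40
```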

Built with Axolotl

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Tor-8B).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 18.33 |
| IFEval (0-Shot)     | 23.82 |
| BBH (3-Shot)        | 31.74 |
| MATH Lvl 5 (4-Shot) |  5.44 |
| GPQA (0-shot)       |  9.84 |
| MuSR (0-shot)       |  8.82 |
| MMLU-PRO (5-shot)   | 30.33 |
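
The reported average is the mean of the six benchmark scores above:

```python
# Sanity check: Avg. is the mean of the six displayed scores.
scores = [23.82, 31.74, 5.44, 9.84, 8.82, 30.33]
print(round(sum(scores) / len(scores), 2))  # 18.33
```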