---
language:
  - en
license: llama3.2
tags:
  - shining-valiant
  - shining-valiant-2
  - valiant
  - valiant-labs
  - llama
  - llama-3.2
  - llama-3.2-instruct
  - llama-3.2-instruct-3b
  - llama-3
  - llama-3-instruct
  - llama-3-instruct-3b
  - 3b
  - science
  - physics
  - biology
  - chemistry
  - compsci
  - computer-science
  - engineering
  - technical
  - conversational
  - chat
  - instruct
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
  - sequelbox/Celestia
  - sequelbox/Spurline
  - sequelbox/Supernova
pipeline_tag: text-generation
model_type: llama
model-index:
  - name: Llama3.2-3B-ShiningValiant2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 69.85
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: ARC Challenge (25-Shot)
          type: arc_challenge
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 46.25
            name: normalized accuracy
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Biology (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 56.25
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Biology (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 63.55
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Chemistry (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 41
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Chemistry (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 41.38
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Physics (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 34.31
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Physics (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 35.76
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Computer Science (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 48
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Computer Science (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 58
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU STEM (5-shot)
          type: mmlu
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 45.54
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 48.9
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 19.11
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 9.14
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 3.02
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 5.49
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 19.1
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
          name: Open LLM Leaderboard
---

Shining Valiant 2 is a chat model built on Llama 3.2 3b, finetuned on our data for friendship, insight, knowledge and enthusiasm.

## Version

This is the 2024-09-27 release of Shining Valiant 2 for Llama 3.2 3b.

We've improved and open-sourced our new baseline science-instruct dataset. This release features improvements in physics, chemistry, biology, and computer science.

Future upgrades will continue to expand Shining Valiant's technical knowledge base.

Help us out by recommending Shining Valiant 2 to your friends!

## Prompting Guide

Shining Valiant 2 uses the Llama 3.2 Instruct prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.2-3B-ShiningValiant2"

# Load the model into a chat-capable text-generation pipeline in bfloat16.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat-format input: a system prompt followed by the user's question.
messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Describe the use of chiral auxiliaries in organic synthesis."},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

# The pipeline returns the whole conversation; the final entry is the model's reply.
print(outputs[0]["generated_text"][-1])
```
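
If you prefer to build the prompt yourself rather than rely on the pipeline, the tokenizer's chat template produces the same Llama 3.2 Instruct format. The following is a minimal sketch (not from the original card), with a placeholder question and generation settings chosen only for illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ValiantLabs/Llama3.2-3B-ShiningValiant2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Explain Le Chatelier's principle."},  # placeholder question
]

# apply_chat_template renders the Llama 3.2 Instruct special tokens
# (<|start_header_id|> ... <|eot_id|>) and appends the assistant header
# so the model continues with its reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```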

## The Model

Shining Valiant 2 is built on top of Llama 3.2 3b Instruct.

The current version of Shining Valiant 2 is trained on technical knowledge using sequelbox/Celestia, complex reasoning using sequelbox/Spurline, and general chat capability using sequelbox/Supernova.
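
All three datasets are public on the Hugging Face Hub, so the training data can be inspected directly. The snippet below is a small sketch for browsing them; the split names and column layout are whatever each dataset repository defines, not something specified in this card:

```python
from datasets import load_dataset

# The three public datasets used to train this release of Shining Valiant 2.
for repo in ["sequelbox/Celestia", "sequelbox/Spurline", "sequelbox/Supernova"]:
    ds = load_dataset(repo)
    print(repo, ds)  # prints the available splits, row counts, and column names
```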

We're super excited that Shining Valiant's dataset has been fully open-sourced! She's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.

Shining Valiant 2 is created by Valiant Labs.

Check out our HuggingFace page for our open-source Build Tools models, including the newest version of code-specialist Enigma!

Follow us on X for updates on our models!

We care about open source. For everyone to use.

We encourage others to use our models as a base for further finetuning.