---
license: apache-2.0
model-index:
  - name: flyingllama
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 24.74
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/flyingllama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 38.35
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/flyingllama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 26.14
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/flyingllama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 41.6
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/flyingllama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 50.12
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/flyingllama
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/flyingllama
          name: Open LLM Leaderboard

# Model Card for kevin009/flyingllama

## Model Description

kevin009/flyingllama is a small language model based on the Llama architecture, intended for text generation and other natural language processing tasks. It has a hidden size of 1024, 24 hidden layers, and 16 attention heads, uses the SiLU activation function, and has a vocabulary of 50,304 tokens.
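
As a quick check, the configuration above can be inspected with `transformers`. This is a minimal sketch; the field names assume the checkpoint uses the standard `LlamaConfig`:

```python
from transformers import AutoConfig

# Inspect the architecture described above; field names follow the
# standard transformers LlamaConfig (an assumption, not verified here).
config = AutoConfig.from_pretrained("kevin009/flyingllama")
print(config.hidden_size)          # expected: 1024
print(config.num_hidden_layers)    # expected: 24
print(config.num_attention_heads)  # expected: 16
print(config.vocab_size)           # expected: 50304
print(config.hidden_act)           # expected: "silu"
```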

## Model Usage

This model is suited to text generation, language modeling, and other natural language processing applications that involve understanding and generating human-like text.
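
A minimal usage sketch with the `transformers` text-generation pipeline (the prompt and sampling parameters here are illustrative, not tuned for this model):

```python
from transformers import pipeline

# Load the checkpoint with the standard text-generation pipeline.
generator = pipeline("text-generation", model="kevin009/flyingllama")

output = generator(
    "Once upon a time,",
    max_new_tokens=50,   # cap the length of the generated continuation
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.8,     # illustrative value, not tuned
)
print(output[0]["generated_text"])
```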

## Limitations

Like any language model, kevin009/flyingllama has limitations stemming from its architecture and training data. Users should evaluate its performance on their specific use cases before relying on it.


## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/flyingllama).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 30.16 |
| AI2 Reasoning Challenge (25-Shot) | 24.74 |
| HellaSwag (10-Shot)               | 38.35 |
| MMLU (5-Shot)                     | 26.14 |
| TruthfulQA (0-shot)               | 41.60 |
| Winogrande (5-shot)               | 50.12 |
| GSM8k (5-shot)                    |  0.00 |
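
Individual numbers could in principle be re-run locally with EleutherAI's `lm-evaluation-harness`. A hedged sketch using its Python API follows; the task name and few-shot count are taken from the metadata above, and the leaderboard's exact harness version and settings may differ:

```python
import lm_eval  # pip install lm-eval

# Sketch: evaluate the HellaSwag (10-shot) score; other tasks
# in the table follow the same pattern with their own few-shot counts.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=kevin009/flyingllama",
    tasks=["hellaswag"],
    num_fewshot=10,
)
print(results["results"]["hellaswag"])
```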