Adding Evaluation Results

#3
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -65,4 +65,17 @@ model = AutoModelForCausalLM.from_pretrained("lgaalves/tinyllama-1.1b-chat-v0.3_
 
  # Intended uses, limitations & biases
 
- You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.
+ You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__tinyllama-1.1b-chat-v0.3_platypus)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 30.28 |
+ | ARC (25-shot)        | 30.29 |
+ | HellaSwag (10-shot)  | 55.12 |
+ | MMLU (5-shot)        | 26.13 |
+ | TruthfulQA (0-shot)  | 39.15 |
+ | Winogrande (5-shot)  | 55.8  |
+ | GSM8K (5-shot)       | 0.53  |
+ | DROP (3-shot)        | 4.94  |
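
Since the card notes the raw model can be used for text generation, a minimal usage sketch with `transformers` may help. The hunk header truncates the repo id, so the full name `lgaalves/tinyllama-1.1b-chat-v0.3_platypus` is an assumption inferred from the leaderboard details link above; the prompt and sampling settings are illustrative only.

```python
# Minimal sketch: generate text with the evaluated checkpoint.
# The repo id below is an assumption (the diff's hunk header truncates it);
# it is inferred from the leaderboard details dataset name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lgaalves/tinyllama-1.1b-chat-v0.3_platypus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt and sampling settings, not prescribed by the card.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As the card warns, output from this checkpoint is unfiltered and may contain false information, so any downstream use should add its own validation.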