leonardlin committed
Commit c601b4e
1 Parent(s): cb4c9bf

Update README.md

Files changed (1)
  1. README.md +27 -3
README.md CHANGED
@@ -1,12 +1,36 @@
 ---
 license: apache-2.0
+language:
+- ja
+- en
+datasets:
+- augmxnt/ultra-orca-boros-en-ja-v1
 base_model: tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1
 tags:
 - generated_from_trainer
-model-index:
-- name: outputs/basemodel-swallowmx-8x22b
-  results: []
 ---
+shisa-v2 Base Model ablation
+
+Using a [fork](https://github.com/shisa-ai/shaberi) of [Lightblue's Shaberi benchmark framework](https://github.com/lightblue-tech/japanese_llm_eval):
+
+| Model                                     | Average | ELYZA-tasks-100 | MT-Bench | Rakuda  | Tengu-Bench |
+|-------------------------------------------|---------|-----------------|----------|---------|-------------|
+| gpt-4-turbo-2024-04-09                    | 8.75    | 8.78            | 8.74     | 9.18    | 8.31        |
+| CohereForAI/c4ai-command-r-plus           | 7.69    | 7.50            | 7.43     | 9.05    | 6.79        |
+| karakuri-ai/karakuri-lm-70b-chat-v0.1     | 6.84    | 6.86            | 6.43     | 7.85    | 6.23        |
+| lightblue/ao-karasu-72B                   | 6.81    | 7.19            | 6.54     | 7.25    | 6.27        |
+| shisa-ai/shisa-llama3-8b-v1^              | 6.29    | 6.62            | 6.41     | 7.05    | 5.07        |
+| **shisa-ai/shisa-swallowmx-13a47b-v1^**   | **6.17**| **6.48**        | **6.07** | **7.11**| **5.03**    |
+| Rakuten/RakutenAI-7B-chat                 | 5.58    | 5.92            | 4.60     | 6.58    | 5.24        |
+| shisa-ai/shisa-gemma-7b-v1                | 5.64    | 6.50            | 5.42     | 5.10    | 5.55        |
+| augmxnt/shisa-gamma-7b-v1                 | 5.56    | 5.84            | 4.00     | 6.73    | 5.68        |
+| lightblue/qarasu-14B-chat-plus-unleashed  | 5.20    | 5.58            | 4.74     | 5.46    | 5.01        |
+| cyberagent/calm2-7b-chat                  | 4.76    | 4.90            | 3.58     | 5.75    | 4.81        |
+| mistralai/Mistral-7B-Instruct-v0.2        | 4.69    | 5.78            | 4.65     | 3.80    | 4.53        |
+| shisa-ai/shisa-yi1.5-9b-v1                | 4.63    | 5.98            | 4.28     | 3.26    | 5.00        |
+
+^ Sampler settings: temperature 0.2, min_p 0.1, frequency_penalty 0.5
+
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
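
The footnoted sampler settings (temperature 0.2, min_p 0.1, frequency_penalty 0.5) can be reproduced at inference time. Below is a minimal sketch, assuming a vLLM backend and the model ID from the table; it is not necessarily the exact invocation the Shaberi fork uses, and the prompt is illustrative only.

```python
# Sketch: apply the footnoted sampler settings with vLLM (assumption, not the
# benchmark harness itself). Requires: pip install vllm transformers
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

MODEL_ID = "shisa-ai/shisa-swallowmx-13a47b-v1"  # model ID from the table above

# Assumes the tokenizer ships a chat template; adjust tensor_parallel_size to
# fit the MoE weights across your GPUs.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
llm = LLM(model=MODEL_ID, tensor_parallel_size=1)

sampling_params = SamplingParams(
    temperature=0.2,
    min_p=0.1,
    frequency_penalty=0.5,
    max_tokens=1024,
)

# Illustrative Japanese prompt; the benchmark supplies its own eval prompts.
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```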