YC-Chen committed on
Commit
d8fd1db
•
1 Parent(s): ec897e6

Update README.md

Files changed (1): README.md +7 -7
README.md CHANGED
@@ -6,7 +6,7 @@ license: apache-2.0
 
 
 
-## Performance
+## 🏆 Performance
 
 | Models | #Parameters | Organization | License | Function Calling? | Instruction Following? |
 |--------------------------------------------------------------------------------------------|-------------|------------|------------|-------------------|----------|
@@ -15,7 +15,7 @@ license: apache-2.0
 | [Gorilla-OpenFunctions-v2](https://huggingface.co/MediaTek-Research/Breeze-7B-FC-v1_0) | 7B | Gorilla LLM | Apache 2.0 | Yes | No |
 | [GPT-3.5-Turbo-0125](https://openai.com) | | OpenAI | Proprietary | Yes | Yes |
 
-📌 **Evaluate function calling on EN benchmark**
+**Evaluate function calling on EN benchmark**
 
 Berkeley function-calling leaderboard
 
@@ -27,7 +27,7 @@ Berkeley function-calling leaderboard
 
 ![](misc/radar_chart_en.png)
 
-📌 **Evaluate function calling on ZHTW benchmark**
+**Evaluate function calling on ZHTW benchmark**
 
 function-calling-leaderboard-for-zhtw
 
@@ -40,7 +40,7 @@ function-calling-leaderboard-for-zhtw
 ![](misc/radar_chart_zhtw.png)
 
 
-📌 **Evaluate instruction following on EN benchmark**
+**Evaluate instruction following on EN benchmark**
 
 MT-Bench
 
@@ -49,7 +49,7 @@ MT-Bench
 | **Breeze-7B-FC-v1_0** *vs.* Breeze-7B-Instruct-v1_0 | 25 (15.6%) | 72 (45.0%) | 63 (39.4%) |
 
 
-📌 **Evaluate instruction following on ZHTW benchmark**
+**Evaluate instruction following on ZHTW benchmark**
 
 MT-Bench-TC
 
@@ -58,9 +58,9 @@ MT-Bench-TC
 | **Breeze-7B-FC-v1_0** *vs.* Breeze-7B-Instruct-v1_0 | 36 (22.5%) | 81 (50.6%) | 43 (26.9%) |
 
 
-## How to use
+## 👩‍💻 How to use
 
-📌 **Dependency**
+**Dependency**
 
 ```
 pip install mtkresearch vllm