Wanfq committed
Commit 8d20685 • 1 Parent(s): 7ac526e

Update README.md

Files changed (1)
README.md  +3 -3
README.md CHANGED
@@ -12,7 +12,7 @@ pinned: false
 
 <div id="top" align="center">
 
-**Knowledge Fusion of Large Language Models**
+<p style="font-size: 44px; font-weight: bold;">Knowledge Fusion of Large Language Models</p>
 
 
 <h4> |<a href="https://arxiv.org/abs/2401.10491"> 📑 FuseLLM Paper @ICLR2024 </a> |
@@ -22,7 +22,7 @@ pinned: false
 </h4>
 
 <p align="center">
-<img src="https://github.com/18907305772/FuseLLM/blob/main/assets/logo.png" width="95%"> <br>
+<img src="logo.png" width="60%"> <br>
 </p>
 
 </div>
@@ -35,7 +35,7 @@ pinned: false
 - **Feb 26, 2024:** 🔥🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), and [Tulu-2-DPO-70B](https://huggingface.co/allenai/tulu-2-dpo-70b), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo) and [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
 
 <p align="center">
-<img src="https://github.com/18907305772/FuseLLM/blob/main/FuseChat/assets/fig_0.png" width="70%"> <br>
+<img src="fig_0.png" width="40%"> <br>
 </p>
 
 | Priority Models | #Params | MT-Bench | Open Source Models | #Params | MT-Bench |
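
For reference, a minimal sketch of querying the FuseChat-7B-VaRM checkpoint mentioned above with Hugging Face `transformers`. The model id comes from the announcement; the dtype, device placement, and generation settings are illustrative assumptions rather than settings recommended by the FuseChat authors, and the prompt format is whatever chat template ships with the checkpoint.

```python
# Minimal usage sketch (assumptions noted inline); not an official FuseChat example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FuseAI/FuseChat-7B-VaRM"  # id from the release announcement above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",           # requires the `accelerate` package
)

# Rely on the chat template bundled with the checkpoint, if one is provided.
messages = [{"role": "user", "content": "Explain model fusion in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative generation settings, not tuned values.
outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```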